perm filename REVIEW[W88,JMC]2 blob
sn#855908 filedate 1988-04-15 generic text, type C, neo UTF8
review[w88,jmc] Review of The Question of Artificial Intelligence by Bloomfield
Notes:
Does Shanker assert the impossibility of a specific performance?
Is there some biological or psychological understanding that he
denies?
It is a pity that Shanker ignores the areas in which AI research
overlaps philosophy and in which AI research has profited from
the work of philosophers. Consider the question of how knowledge
differs from true belief.
Gettier examples.
Frege
Positivism as a doctrine concerning how to construct intelligent
programs.
Philosophers at the crossroads. Will they use their 2,000 years of study
of important questions or will they stand on the sidelines and
see epistemology detached from philosophy?
Some will do one thing. Others will do another.
We don't want to eliminate philosophy from the science of AI ---
at least not so long as there is some hope of getting philosophers
to do some of the work.
knowing that, knowing how, knowing what, knowing whether,
knowing about, all I know is
19 - as measured by the spreading influence of the paradigm. No, by
the fact that a modern chess program can beat S. G. Shanker.
21 - ``It shows that now factual claims have been made.''
queries to look up
Does Kowalski refer to McCarty, so that Leith ought to have noticed?
The baleful influence of Tony Battista.
Even Shannon didn't refer to information theory in his one article
on AI --- the paper on chess.
ALLEN.NEWELL@A.CS.CMU.EDU
history request
In a review of a bad book, The Question of Artificial Intelligence,
I plan to make some remarks about the institutional history of AI including
something like the following.
"AI didn't start as an establishment. Minsky and I were fortunate that the MIT
Research Laboratory of Electronics had a ``joint services contract'' that
permitted its head, Jerome Wiesner, to say yes instantly when we
encountered him in the hall in May 1958 and asked for a secretary, a key
punch, and two programmers. He countered by asking if we wanted to
supervise six graduate students in mathematics whom he had committed
himself to support but had no immediate job for. I believe that the
Newell and Simon work at Rand started in a similar informal way."
Is my conjecture correct or was there a formal proposal to begin your
and Herb's work on complex information processing?
Shannon, C. E., "Programming a digital computer for playing chess,"
Phil. Mag., 1950, 41, 356-375.
Leith seems to be holding up one end of an unseemly squabble over
money. See p. 241.
The alliance among Michie, Longuet-Higgins and Gregory with
Meltzer standing to one side was doomed to fail from the start.
There was never enough scientific compatibility, except possibly
between Meltzer and Michie, where there wasn't personal compatibility.
In discussing the ups and downs of the funding of AI in the U.S.
and the associated changes of emphasis, one should not neglect to
inquire into the views of the successive directors of DARPA and
also into the long term baleful influence of Anthony Battista.
This book is of a genre that treats work in a scientific field
from the standpoint of various social science and humanistic disciplines,
e.g. philosophy, history, sociology, psychology and politics. Scientists
often complain about the results, both generally (judging the whole effort
as wasted) and specifically (citing instances of ignorance and
misunderstanding). We're open minded about the general activity, e.g.
maybe the sociology of research in AI has independent intellectual
interest, though surely less than the field itself, and maybe sociological
observations might cause participants in the field to change the way they
do something, e.g. recognize achievement, define authority and distribute
rewards. This review concerns specific matters, and is mainly negative,
complaining about ignorance and obtuseness.
For one thing, the book seems to suffer from the methodology of
so-called {\it critical science} in which one supports ideological
complaints by selected citations. Classical critical science was
generally concerned with (say) ``exposing the bourgeois character
of establishment scientific assumptions''. The radicalism seems
to be faded in this book, but the methodology remains.
The successive chapters are entitled
``AI at the Crossroads'' by S. G. Shanker dealing with philosophy,
``The Culture of AI'' by B. P. Bloomfield,
``Development and Establishment in AI'' by J. Fleck,
``Frames of AI'' by J. Schopman,
``Involvement, Detachment and Programming: The Belief in PROLOG''
by P. Leith and
``Expert Systems, AI and the Behavioural Co-ordinates of Skill'' by
H. M. Collins.
``AI at the Crossroads'' suggests an article that might be
entitled ``Some Philosophers at a Crossroads''. The path Shanker is
taking from the crossroads would lead to epistemology and the philosophy
of mind leaving philosophy entirely. AI programs require knowledge and
belief and their construction requires their formalization and scientific
study. Shanker ignores this area in which philosophers and AI researchers
have begun to co-operate and compete. Instead he considers the idea of
artificial intelligence to be a category error of some unintelligible sort.
In particular, it isn't clear whether Shanker argues that there is any
particular activity in which the performance of computer programs is
necessarily inferior to that of humans.
Shanker's 124 notes include no reference to the technical
literature of AI, e.g. no textbook, no articles in {\it Artificial
Intelligence} and no papers in the proceedings of the International
Joint Conferences on AI. This permits him to invent the subject.
For example, he invents and criticizes an ideology of AI in which
what a computer program knows is identified with the measure of
information introduced by Claude Shannon in 1948. I wasn't aware
that I or any significant AI pioneer made that identification, and
it finally occurred to me to check whether even Shannon did. He
didn't. His 1950 paper on ``Programming a Computer for Playing Chess''
cited in Shanker's paper never mentions information.
``The Culture of AI'' argues that the ideas put forth by AI
researchers (and scientists generally) should not be discussed
independently of the culture that developed them. We don't agree with
this, but have no objection to also discussing the culture. A rather
extreme example of considering culture is favorably cited by Bloomfield,
namely Athanasiou's
\item{}``The culture of AI is imperialist and seeks to expand the kingdom of
the machine $\ldots$. The AI community is well organized and well
funded, and its culture fits its dreams: it has high priests, its
greedy businessmen, its canny politicians. The U.S. Department of
Defense is behind it all the way. And like the communists of old,
AI scientists believe in their revolution; the old myths of tragic
hubris don't trouble them at all''.
It's rather hard to get down to discussing declarative vs.
procedural representations or combinatorial explosion after such bombast.
Moreover, whether current expert system technology is capable of
writing programmed assistants for American Express authorizers, general
medical practitioners, ``barefoot doctors'' in China or district attorneys
is an objective question, and it doesn't seem that Bloomfield's article
contains any helpful information.
We can't tell whether there is much to say about how the AI
cultural milieu influenced its ideas, because Bloomfield's information
about the AI culture is third hand. There is no sign that he talked to AI
students or researchers himself. Instead he cites the books by Joseph
Weizenbaum and Sherry Turkle. Weizenbaum dislikes the M.I.T. hackers
(it's mutual), AI and otherwise, and confuses hackers with researchers,
groups that only partly overlap. Turkle at least did some well prepared
interviewing of both hackers and researchers. However, she doesn't make
much of a case that the ideas stemmed from the culture per se. Indeed the
originators of many of the ideas were and are non-participants in the
informal culture of the AI laboratories.
``Development and Establishment in AI'' contains a lot of
administrative history of AI research institutions and their government
support. The information about Britain is moderately voluminous and more
or less accurate, and the paper contains almost all the references to
actual AI literature that occur in the volume. Its American history is
less accurate. There was no ``Automata Studies'' conference held in 1952.
The volume of that title was composed of papers solicited by mail. The
Dartmouth Summer Project on Artificial Intelligence was not a ``summer
school'', e.g. the participants were not divided, even informally, into
lecturers and students. The Newell-Simon group began its activities about
two years before the Dartmouth conference. It is indeed true that the
pioneers of AI in the U.S. met each other early, formed research groups
that made continued contributions, and became authorities in the field.
It's hard to see how it could have been otherwise. A fuller picture would
mention that there were also also-rans in the history of AI, people whose
ideas did not meet with acceptance and who dropped out.
It should also be remarked that the ``AI establishment'' owes
little to the general ``scientific establishment''. AI would have
developed much more slowly in the U.S. if we had had to persuade the
general run of physicists, mathematicians, biologists, psychologists or
electrical engineers on advisory committees to allow substantial NSF money
to be allocated to AI research. Moreover, the approaches to intelligence
originated by Minsky, Newell, Simon and myself were quite different from
those advocated by Norbert Wiener, John von Neumann or Warren McCulloch.
We deliberately avoided the name cybernetics in order to keep the
distinction clear.
Our good fortune with ARPA is due to its creation with new money
at a time when we were ready to ask for support, and very substantially to
the psychologist J. C. R. Licklider. Licklider was on the Air Force
Scientific Advisory Board around 1960 and argued that large command and
control systems were being built with no support for the relevant basic
science. ARPA responded by offering to create an office and budget for
such support if Licklider would agree to head it. In contrast European
AI research long depended on crumbs left by the more established sciences.
Recent PhDs were unable to initiate the research, and the European pioneers
tended to be older people with existing reputations in other fields.
We make a final comment about the Lighthill report. When
a physicist is forced to think about AI he generally reinvents the
subject in his individual way. Some expect it to be easy and others
impossible. Lighthill was in the latter category. In the BBC debate,
McCarthy thought he had a powerful argument and asked Lighthill why, if the
physicists hadn't mastered turbulence in 100 years, AI researchers
should be expected to give up just because they hadn't mastered
AI in 20. Lighthill's reply, which the BBC unfortunately didn't include
in the broadcast, was that the physicists should give up on turbulence.
Despite the deficiencies indicated above, the paper shows that
attention to detail does pay off in useful information about history.
``Frames of Artificial Intelligence'' by J. Schopman purports ``to
sketch a close-up of a crucial moment in the history of Artificial
Intelligence (AI), the moment of its genesis in 1956''. Schopman begins
by telling us that ``an exposition will be given of the investigative
method used, SCOST the `Social construction of science and technology'.''
The crucial moment is stated to be the Dartmouth Summer Research Project
on Artificial Intelligence except that Schopman refers to it as a conference
and mixes it up with the {\it Automata Studies} collection of papers. The
papers for that collection were solicited starting in 1952, and the volume
was finally published in 1956. The Dartmouth project did not result in
a publication.
Whatever the SCOST method includes, it evidently doesn't include
either interviewing the participants in the activity (almost all of whom
are still alive and active) or looking for contemporary documents. The
contrast with Herbert Stoyan's work on the history of the LISP programming
language is amazing. Stoyan started his work while still living in East
Germany and unable to travel. Nevertheless, he wrote to everyone involved
in early LISP work, collected all the documents anyone would copy for him
and was able to confront what people told him in letters and interviews
(after he was allowed to emigrate) with what the early documents said.
He eventually came to know more about LISP's early history than any
individual participant. If Schopman or anyone else wants to know what
we had in mind when we proposed the Dartmouth study, he should obtain
a copy of the proposal. If he wants to know why the Rockefeller
Foundation gave us the \$7500, he could ask them if anyone there wrote a
memorandum at the time justifying the support.
Old proposals and old granting-agency memoranda documenting their
support decisions are an important unused tool in the recent history of
science. The proposals often say in ways unrecorded in published papers
what the researcher was hoping to accomplish, and the support memoranda
tell what the agency thought it was accomplishing. Old referees' reports
on papers submitted for publication and proposal evaluations provide
another useful source. In the U.S.A., the Freedom of Information Act
provides an important way of finding out what people in Government thought
they were doing.
Now let's return to Schopman's actual speculations about what
people were doing. He says that the Dartmouth ``conference'' was ``a result
of the choices made by a group of people who were dissatisfied with the
then-prevailing scientific way of studying human behaviour. They
considered their approach as radically different, a revolution ---
the so-called `cognitive revolution'.'' Schopman has made all that up.
The proposal for the Dartmouth conference, as I remember having
written it, contains no criticism of anybody's way of studying human
behavior, because I didn't consider it relevant. As suggested by the term
``artificial intelligence'' we weren't considering human behavior except
as a clue to possible effective ways of doing tasks. The only
participants interested in human behavior were Newell and Simon, and they
didn't discuss it much in that forum. There were no lectures on human
behavior at Dartmouth that summer. Also, as far as I remember, the phrase
`cognitive revolution' came into use at least ten years later.
For this reason, whatever revolution there may have been around
the time of the Dartmouth Project was to get entirely away from studying
human behavior and to consider the computer as a tool. Thus AI was
created as a branch of computer science and not as a branch of
psychology. Newell and Simon continued to be interested in both
AI as computer science and AI as psychology, but they were somewhat
exceptional in this.
Schopman mentions many influences of earlier work on AI
pioneers. I can report that many of them didn't influence
me except negatively, but in order to settle the matter of influences
it would be necessary to actually ask (say) Minsky and Newell and
Simon. As for myself, one of the reasons for inventing the term
``artificial intelligence'' was to escape association with ``cybernetics''.
Its concentration on analog feedback seemed misguided, and I wished
to avoid having either to accept Norbert (not Robert) Wiener as
a guru or having to argue with him. (By the way I assume that
the ``Walter Gibbs'' Schopman refers to as having influenced Wiener
is actually Josiah Willard Gibbs).
Schopman paints a picture of the intellectual situation in 1956
based on the publications of many people who wrote before that year.
Maybe that was the intellectual situation for many, but I suspect the
situation was more fragmented than that; many people hadn't read the
papers Schopman identifies as influential. For example, the idea that
programming computers rather than building machines was the key to AI
received its first emphasis at the Dartmouth meeting. None of von Neumann
(surprisingly), Wiener, McCulloch, Ashby and MacKay thought in those
terms. However, by the time of Dartmouth, Newell and Simon, Samuel and
Bernstein had already written programs. McCarthy and Minsky expressed
their 1956 ideas as proposals for programs, although their earlier work
had not assumed programmable computers. However, Alan Turing had already
made the point that AI was a matter of programming computers in his 1950
article ``Computing Machinery and Intelligence'' in the British philosophy
journal {\it Mind}. When I asked (maybe in 1979) in a historical panel
who had read the paper, I got negative answers. The paper only became
well known after James R. Newman reprinted it in his 1956 {\it The World
of Mathematics}. Actual influences depend on what is actually read.
Finally, there is Schopman's chart that associates AI frames
(paradigms) with periods. In no way did these ``paradigms'' dominate
work in the eras considered. There were, however, substantial shifts
in emphasis at various times since the Dartmouth conference. Someone
studying this will need to subdivide the AI ``paradigm'' in order
to say which ``subparadigms'' were popular at different times. One
way to study this would be to classify PhD theses and IJCAI
papers and count them. (Even sociologists can count).
``Involvement, Detachment and Programming: The Belief in
Prolog'' by Philip Leith treats the enthusiasm for Prolog as a
sociological phenomenon akin to the 16th century Ramist movement
in the logic and rhetoric of law. The Britannica article on
rhetoric says it emphasized figures of speech. I wasn't convinced
that it had much analogy to Prolog. Leith's complaint that Kowalski's
work on expressing the British Nationality Act in logic programming was supported
by the wrong Research Council leads this American to speculate that
quarrels about money and turf are being reflected.